991.
Recent years have witnessed a growing number of publications dealing with the imbalanced learning issue. While a plethora of techniques has been investigated on traditional low-dimensional data, little is known about their effect on behaviour data. This kind of data reflects fine-grained behaviours of individuals or organisations and is characterized by sparseness and very high dimensionality. In this article, we investigate the effects of several over- and undersampling, cost-sensitive learning and boosting techniques on the problem of learning from imbalanced behaviour data. Oversampling techniques show good overall performance and do not seem to suffer from the overfitting that traditional studies report. A variety of undersampling approaches are investigated as well and reveal the performance-degrading effect of instances exhibiting odd behaviour. Furthermore, the boosting process indicates that the regularization parameter in the SVM formulation acts as a weakness indicator and that a combination of weak learners can often achieve better generalization than a single strong learner. Finally, the EasyEnsemble technique is presented as the method outperforming all others. By randomly sampling several balanced subsets, feeding them to a boosting process and subsequently combining their hypotheses, a classifier is obtained that achieves noise/outlier reduction effects while efficiently exploring the majority class space. The method is also very fast, since it is parallelizable and each subset is only twice as large as the minority class.
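The EasyEnsemble idea summarized above can be sketched in a few lines: draw several balanced subsets (all minority instances plus an equal-size random draw from the majority), fit a learner to each, and combine the hypotheses by voting. This is only a minimal illustration — the synthetic data, the nearest-centroid rule standing in for the boosted learner of the original method, and all parameters are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic imbalanced data: 950 majority (class 0), 50 minority (class 1)
X_maj = rng.normal(0.0, 1.0, size=(950, 5))
X_min = rng.normal(2.0, 1.0, size=(50, 5))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 950 + [1] * 50)

min_idx = np.where(y == 1)[0]
maj_idx = np.where(y == 0)[0]

# EasyEnsemble-style loop: each balanced subset gets its own simple
# learner (here a nearest-centroid rule as a stand-in)
centroid_pairs = []
for _ in range(10):
    sub_maj = rng.choice(maj_idx, size=len(min_idx), replace=False)
    c0 = X[sub_maj].mean(axis=0)   # majority centroid for this subset
    c1 = X[min_idx].mean(axis=0)   # minority centroid
    centroid_pairs.append((c0, c1))

def predict(Xq):
    # combine the subset hypotheses by majority vote
    votes = []
    for c0, c1 in centroid_pairs:
        d0 = np.linalg.norm(Xq - c0, axis=1)
        d1 = np.linalg.norm(Xq - c1, axis=1)
        votes.append((d1 < d0).astype(float))
    return (np.mean(votes, axis=0) >= 0.5).astype(int)

pred = predict(X)
```

Each subset is small (twice the minority size), so the learners can be trained in parallel, which is the source of the speed advantage the abstract mentions.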
992.
Statistical evaluation of biclustering solutions is essential to guarantee the absence of spurious relations and to validate the high number of scientific statements inferred from unsupervised data analysis without a proper statistical ground. Most biclustering methods rely on merit functions to discover biclusters with specific homogeneity criteria; however, strong homogeneity does not guarantee the statistical significance of biclustering solutions. Furthermore, although some biclustering methods test the statistical significance of specific types of biclusters, there are no methods to assess the significance of flexible biclustering models. This work proposes a method to evaluate the statistical significance of biclustering solutions. It integrates state-of-the-art statistical views on the significance of local patterns and extends them with new principles to assess the significance of biclusters with additive, multiplicative, symmetric, order-preserving and plaid coherencies. The proposed statistical tests provide the unprecedented possibility of minimizing the number of false-positive biclusters without incurring false negatives, and of comparing state-of-the-art biclustering algorithms according to the statistical significance of their outputs. Results on synthetic and real data support the soundness and relevance of the proposed contributions, and stress the need to combine significance and homogeneity criteria to guide the search for biclusters.
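To make the "significance of local patterns" view concrete, a classic building block is the tail probability that a bicluster of a given size arises by chance in a random binary matrix, with a Bonferroni-style correction for the number of column subsets examined. The sketch below is a generic textbook-style bound, not the paper's test for flexible coherencies; the function name and the uniform-density null model are assumptions.

```python
from math import comb

def bicluster_pvalue(n_rows, n_cols, density, r, c):
    """Upper bound on the probability that some set of c columns is
    all-ones in at least r rows of a random n_rows x n_cols binary
    matrix with the given element density. The count of matching rows
    is Binomial(n_rows, density**c); the binomial tail is then
    multiplied by the number of candidate column subsets (Bonferroni)."""
    p_row = density ** c  # chance that a single row matches a fixed column set
    n, p = n_rows, p_row
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))
    return min(1.0, tail * comb(n_cols, c))
```

A bicluster covering many rows on even a few columns quickly becomes vanishingly unlikely under this null, which is why homogeneity alone (a small merit-function value) is not the same thing as significance.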
993.
This paper addresses the problem of visual simultaneous localization and mapping (SLAM) in an unstructured seabed environment, applied to an unmanned underwater vehicle equipped with a single monocular camera as the main measurement sensor. Monocular vision is regarded as an efficient sensing option in the context of SLAM; however, it poses a variety of challenges when the relative motion is determined by matching pairs of images in in-water visual SLAM. Among these challenges, this research focuses on loop closure, one of the most important issues in SLAM. This study proposes a robust loop-closure algorithm that improves operational performance in terms of both navigation and mapping by efficiently reconstructing image-matching constraints. To demonstrate and evaluate the effectiveness of the proposed loop-closure method, experimental datasets obtained in underwater environments are used, and the validity of the algorithm is confirmed by a series of comparative results.
994.
This study presents the generation of a nonlinear autoregressive exogenous (NARX) model for wind speed forecasting over a 1-h-ahead horizon. A sample of hourly meteorological measurements taken over one year was used to generate the model. The measured variables were wind speed, wind direction, solar radiation, pressure, and temperature. All measurements were taken by the Comision Federal de Electricidad (CFE) at La Mata, in the state of Oaxaca, Mexico. The Mahalanobis distance was used to screen the sample for deviant values in the multivariable data. A Granger causality test was then conducted to establish which input variables would be incorporated into the model. Since solar radiation was the only variable determined to Granger-cause wind speed, it was used in the configuration of the model. For comparison, a one-variable nonlinear autoregressive (NAR) model was also generated. Both the NARX and NAR models were compared against the persistence model by applying the mean absolute error, mean squared error, and mean absolute percentage error to the test data. The results showed the NARX model to be the most precise of the three, reflecting the importance of including additional meteorological variables in wind speed forecasting models.
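The Mahalanobis screening step mentioned above, dropping observations that lie far from the sample mean relative to the sample covariance, can be sketched as follows. The threshold value and the synthetic data are illustrative assumptions, not the CFE dataset or the authors' cutoff.

```python
import numpy as np

def mahalanobis_filter(X, threshold=3.0):
    """Drop rows whose Mahalanobis distance from the sample mean
    exceeds `threshold` (a common multivariate outlier screen)."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # squared Mahalanobis distance of each row, then sqrt
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
    return X[d <= threshold]
```

Unlike per-variable z-scoring, this accounts for correlations between the meteorological variables, so a jointly implausible combination (e.g. of speed, pressure and temperature) can be flagged even when each value looks normal on its own.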
995.
The kinetic freeze-out temperatures, T_0, in nucleus–nucleus collisions at Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) energies are extracted by four methods: (1) the blast-wave model with Boltzmann–Gibbs statistics (the BGBW model), (2) the blast-wave model with Tsallis statistics (the TBW model), (3) the Tsallis distribution with flow effect (the improved Tsallis distribution), and (4) the intercept in T = T_0 + a m_0 (the alternative method), where m_0 denotes the rest mass and T denotes the effective temperature, which can be obtained from different distribution functions. It is found that the relative sizes of T_0 in central and peripheral collisions obtained by the conventional BGBW model, which uses a zero or nearly zero transverse flow velocity β_T, contradict in tendency those from the other methods. With a re-examination of β_T in the first method, in which β_T is taken to be ~(0.40 ± 0.07)c, a recalculation yields a result consistent with the others. Finally, our results show that the kinetic freeze-out temperature in central collisions is larger than that in peripheral collisions.
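The "alternative method" (4) reads T_0 off as the intercept of a linear fit of effective temperature against rest mass, T = T_0 + a m_0. A minimal sketch of that fit is below; the (m_0, T) pairs are made-up illustrative values for pion-, kaon- and proton-like particles, not the RHIC/LHC data analysed in the paper.

```python
import numpy as np

# illustrative (rest mass, effective temperature) pairs in GeV
m0 = np.array([0.140, 0.494, 0.938])
T_eff = np.array([0.170, 0.210, 0.260])

# T = T0 + a*m0: least-squares slope a and intercept T0;
# the intercept estimates the kinetic freeze-out temperature
a, T0 = np.polyfit(m0, T_eff, 1)
```

The positive slope a reflects the flow contribution growing with particle mass; extrapolating to m_0 = 0 removes it, leaving the purely thermal component T_0.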
996.
Multilayered auto-associative neural architectures have been widely used in empirical sensor modeling. Typically, such empirical sensor models are used in sensor calibration and fault monitoring systems. However, simultaneously optimizing the related performance metrics, i.e., auto-sensitivity, cross-sensitivity, and fault detectability, is not a trivial task. Learning procedures for parametric and other relevant non-parametric empirical models are sensitive to optimization and regularization methods. Therefore, there is a need for learning strategies that better exploit the underlying statistical structure among input sensors and are simple to regularize and fine-tune. To this end, we investigated a greedy layer-wise learning strategy and a denoising-based regularization procedure for sensor model optimization. We further explored the effects of denoising hyper-parameters such as noise type and noise level on sensor model performance and suggested optimal settings through rigorous experimentation. A visualization procedure was introduced to gain insight into the internal semantics of the learned model; these visualizations allowed us to suggest an implicit noise-generating process for efficient regularization in higher-order layers. We found that the greedy learning procedure improved the overall robustness of the sensor model. To keep the experimentation unbiased and immune to noise-related artifacts in real sensors, the sensor data were sampled from simulators of the nuclear steam supply system of a pressurized water reactor and the Tennessee Eastman chemical process. Finally, we compared the performance of an optimally regularized sensor model with auto-associative neural network, auto-associative kernel regression, and fuzzy similarity-based sensor models.
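The noise-type and noise-level hyper-parameters of denoising-based regularization amount to a corruption step applied to the inputs before the network is trained to reconstruct the clean signal. A minimal sketch of such a corruption step is below; the two noise types and the function signature are generic assumptions, not the authors' code.

```python
import numpy as np

def corrupt(X, noise_type="gaussian", noise_level=0.1, rng=None):
    """Corrupt sensor inputs for denoising-based regularization.
    'gaussian' adds zero-mean noise with std = noise_level;
    'masking' zeroes each entry independently with prob = noise_level."""
    rng = rng or np.random.default_rng(0)
    if noise_type == "gaussian":
        return X + rng.normal(0.0, noise_level, X.shape)
    if noise_type == "masking":
        mask = rng.random(X.shape) >= noise_level
        return X * mask
    raise ValueError(f"unknown noise_type: {noise_type}")
```

Training the auto-associative model to map `corrupt(X)` back to `X` forces it to exploit correlations among sensors rather than copy each input through, which is what drives down cross-sensitivity.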
997.
998.
Energy audits can provide businesses with valuable information and advice about their current energy use and costs, and about where changes can be made to improve both the environmental performance of the business and its triple bottom line. There is literature on the use of energy audits outside Australia, but little empirical data that would help a business decide whether engagement is economically viable. This paper reports on 49 SME energy audits in Australia; the results verify that economic returns vary significantly and are not, on their own, a reason for most SMEs to act. More will therefore need to be done to engage SMEs, as their collective impact on the environment should be a catalyst for action and support from stakeholders.
999.
Blending two or more pure polymers is an effective way to produce composites with tunable properties. In this paper, we report dynamic Monte Carlo simulation results on the crystallization of a crystalline/crystalline (A/B) symmetric binary polymer blend, wherein the melting temperature of the A-polymer is higher than that of the B-polymer. We study the effect of segregation strength (arising from the immiscibility between the A- and B-polymers) on crystallization and morphological development. Crystallization of the A-polymer precedes that of the B-polymer upon cooling from a homogeneous melt. Simulation results reveal that the morphological development is controlled by the interplay between the crystallization driving force (viz., attractive interaction) and the de-mixing energy (viz., repulsive interaction between the two polymers). With increasing segregation strength, the interface becomes more rigid and restricts the development of crystalline structures. The mean square radius of gyration decreases with increasing segregation strength, reflecting the increased repulsive interaction between the A- and B-polymers; as a consequence, a large number of smaller crystals form with lower crystallinity. Isothermal crystallization reveals that the transition pathways depend strongly on segregation strength. We also observe path-dependent crystallization behavior: two-step (sequential) isothermal crystallization yields superior crystalline structure in both the A- and B-polymers compared with one-step (coincident) crystallization.
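The mean square radius of gyration tracked above is, for a chain of bead coordinates, the mean squared distance of the beads from their centre of mass; a minimal sketch of the computation:

```python
import numpy as np

def mean_square_rg(coords):
    """Mean square radius of gyration of one chain:
    <Rg^2> = (1/N) * sum_i |r_i - r_com|^2."""
    com = coords.mean(axis=0)                       # centre of mass
    return ((coords - com) ** 2).sum(axis=1).mean()
```

A chain squeezed by a rigid interface packs its beads closer to the centre of mass, so a decreasing <Rg^2> with segregation strength is consistent with the chain-compression picture in the abstract.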
1000.
Nanocrystalline materials show many interesting properties, such as high strength and hardness, due to nanosized grains and a high density of interfaces. In this context, the present work reports the effect of Fe (iron) addition to Ni (nickel) on nanostructure retention during annealing of Ni-Fe alloys (with 0, 18.5, 28.5 and 43 wt% Fe) at 450 °C for 16 h. Furthermore, the effect of annealing on the deformation mechanism was investigated. The integral breadth method revealed a decrease in grain size with increasing wt% Fe in Ni. The strain rate sensitivity exponent, a signature of the operating deformation mechanism, showed a higher value (0.10803) in the case of Ni-18.5 wt% Fe during nanoindentation, whereas Ni-0 wt% Fe, Ni-28.5 wt% Fe and Ni-43 wt% Fe were characterized by relatively low exponents (between 0.02069 and 0.10803). The results indicated the presence of the Hall-Petch relationship up to 18.5 wt% Fe and the inverse Hall-Petch relationship above 18.5 wt% Fe.
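The Hall-Petch relation referenced above states that yield strength grows with the inverse square root of grain size, σ = σ₀ + k·d^(−1/2); below a critical grain size the trend inverts (the inverse Hall-Petch regime). A minimal sketch of the classical relation, with illustrative constants not fitted to the Ni-Fe data:

```python
import numpy as np

def hall_petch(d_nm, sigma0=200.0, k=4000.0):
    """Classical Hall-Petch yield strength (MPa) for grain size d (nm):
    sigma = sigma0 + k / sqrt(d). sigma0 (friction stress) and k
    (Hall-Petch coefficient) are illustrative values."""
    return sigma0 + k / np.sqrt(d_nm)
```

In the classical regime, smaller grains mean more grain-boundary pile-up obstacles and hence higher strength; the inversion reported above 18.5 wt% Fe signals a switch to grain-boundary-mediated deformation that this simple formula does not capture.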